Page 6 of 11
On Apr 13, 1:33 am, Puckdropper <puckdropper(at)yahoo(dot)com> wrote:

Not true. The (add and subtract) operations use the same logic. Now, multiply and divide are a whole different kettle...

Really? I've -never- seen an IC chip that did subtraction directly. 'Adder' chips, however, are common as dirt.

You can -accomplish- subtraction using an 'adder' and a bunch of inverters on the second input (and ignore the overflow).

True 'subtract' logic *is* more complicated -- because the states in the
operation table do not collapse as well.

Addition:    operand1 OR  operand2 == 0 => zero result, zero carry
             operand1 XOR operand2 == 1 => one result, zero carry
             operand1 AND operand2 == 1 => zero result, one carry

Subtraction: operand1 EQ operand2            => zero result, zero borrow
             operand1 EQ 1 AND operand2 EQ 0 => one result, zero borrow
             operand1 EQ 0 AND operand2 EQ 1 => one result, one borrow
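The tables above can be sketched at the gate level (an illustrative sketch, not any particular chip's logic; the function names are mine). Note the inverted operand in the borrow expression -- that is exactly the asymmetry that keeps the subtractor's states from collapsing the way the adder's do:

```python
# Gate-level sketch of one bit of an adder vs. a 'true' subtractor.
# Hypothetical names (full_add, full_sub), for illustration only.

def full_add(a, b, carry_in=0):
    """One bit of a ripple-carry adder: returns (sum, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def full_sub(a, b, borrow_in=0):
    """One bit of a true subtractor (a - b): returns (diff, borrow_out).
    The borrow term needs 'NOT a' -- the table is not symmetric in a and b,
    unlike the adder's."""
    d = a ^ b ^ borrow_in
    borrow_out = ((a ^ 1) & b) | (borrow_in & ((a ^ b) ^ 1))
    return d, borrow_out
```

Swapping a and b changes nothing in full_add, but flips the borrow in full_sub -- that asymmetry is the 'extra' logic a true subtractor carries.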

To expound on the 'difference' between addition and subtraction, consider hardware that uses "ONES COMPLEMENT" arithmetic, where the 'negative' of a number is represented by simply inverting all the bits of the positive value -- e.g. the negative of "00000010" is "11111101".

Note well that in *THIS* number representation scheme there are ***TWO*** bit-
values that evaluate to -zero-. '00000000' is 'positive zero', and
'11111111' is 'negative zero'.

It is ***HIGHLY DESIRABLE*** that numeric computations which give a "zero"
result have the bit-pattern of 'positive zero'. If you 'subtract'
'00000011' from '00000011' by 'complement and add', you get:

     '00000011'
    +'11111100'
    ===========
     '11111111'   which is 'negative zero'

If you do it by 'actual' subtraction:

     '00000011'
    -'00000011'
    ===========
     '00000000'   which is 'positive zero', the desired result.

To get the 'desired result' of 'positive zero' using *adder* circuitry,
one has to have an additional stage that examines -every- result for the
'negative zero' bit-pattern, and inverts all the bits.

The 'does addition by complement and subtract' was ***NOT*** unique to the
CDC machines. ***Every*** machine that used "1's complement" arithmetic
internally did things the same way.

There are advantages to "1's complement" over "2's complement", notably that
*all* numbers have a positive and a negative representation. (In 2's
complement math, it is ***NOT POSSIBLE*** to represent the complement of the
'largest possible negative number' -- you *can* have '-2**n' but
only '+((2**n)-1)'.) The disadvantage is that there are -two- values for
'zero'. But that's just 'nothing'. <grin>
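At an 8-bit width (my example choice), the symmetry claim checks out like this:

```python
# Representable ranges at 8 bits under each scheme (sketch; the width
# is my example choice, not anything from the thread).
BITS = 8

# Ones' complement: symmetric range, but two zero patterns.
oc_min, oc_max = -(2**(BITS - 1) - 1), 2**(BITS - 1) - 1   # -127 .. +127

# Two's complement: one zero, but the most negative value has no mate.
tc_min, tc_max = -(2**(BITS - 1)), 2**(BITS - 1) - 1       # -128 .. +127

assert -oc_min == oc_max      # every ones'-complement value has a negation
assert -tc_min == tc_max + 1  # -(-128) = +128 is NOT representable
```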

On the other side of the fence, there *are* advantages to "2's complement",
notably that all numbers have a single *unique* representation. The
disadvantages are that there =is= a negative value that you cannot
represent as a positive number, and 2's complement math *IS* just a
little bit slower -- by one gate time -- than 1's complement. As
processor speeds became faster, that 'one gate time' difference became
less significant, and the world settled on *not* dealing with "+/- zero".

Are you showing off that the information is useless to YOU because your prowess is so elevated?

Idiot. A particular method of encoding negative numbers isn't relevant when discussing the difference/similarity between subtraction and addition. I wouldn't expect you to know anything about it. OTOH, you are up to your usual standards in cashing checks with your mouth that your ass can't cover.

Only if you're doing invert-add to perform subtraction. The topic was specifically about hardware subtraction (or addition = complement-subtract), ***not*** invert-add.

Above you use 2's complement representations in your example. Now you
switch tracks to 1's complement representation of negative numbers (the
only format where negation = inversion). Yes, bitwise ***INVERSION*** can
be done by a single transistor (indeed it takes zero clock cycles to
invert a signal), but this is a negation only if you're doing 1's
complement arithmetic. You still have to do the end-around carry for the
addition (two carry propagations). OTOH, if you're doing 2's complement
arithmetic, negation is invert AND ADD ONE, which also takes two carry
propagation periods (complement, add). TANSTAAFL.
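The 'invert AND ADD ONE' step can be sketched like so (8-bit masked arithmetic standing in for hardware; the name tc_neg is mine):

```python
# Two's-complement negation: invert, then add one (the two steps noted above).
MASK = 0xFF  # 8-bit example width

def tc_neg(x):
    inverted = x ^ MASK           # step 1: bitwise invert (the 'free' part)
    return (inverted + 1) & MASK  # step 2: the add-one needs a carry chain

# 3 -> 0xFD, which is -3 in 8-bit two's complement.
```

Note that tc_neg(0) comes back as 0 -- two's complement has only the one zero pattern, which is the trade the thread is circling.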


That must be one of the reasons they switched to 2s complement, no?

I hate to answer my own question, but the main reason was the duplicity of zeros in 1s complement, I think.

Bill

Mainly, but the uncertainty of the wrap-around carry doesn't help. I'm not sure whether some of the fancier adders (carry look-ahead, carry save, etc.) work well for 1's complement, either (again, the wrap-around issue). Your observation on the two zeros is spot on, however. That takes an additional operation in the critical path of most calculations.

The 'ambiguous' bit-pattern for 'zero' *was* *the* compelling reason that
IEEE 'standardized' on 2's complement. The 'test for zero/non-zero' operation
had to check for *two* bit patterns (all zeroes, all ones), which either
took twice as long as a single check, ***or*** used up a *lot* more 'silicon
real-estate'. Even *worse*, a test for "equality" could not simply check
for a bit-for-bit correspondence between the two values; it had to return
'equal' IF one value was all zeroes and the other was all ones. This
was *really* "bad news" for limited-capability processors -- you had to
invert one operand, SUBTRACT, and *then* perform the zero/non-zero check
described above. Suddenly the test for 'equal' is 3 gate times ***SLOWER***
than a 'subtract'. This *really* hurts performance. "Inequality" compares
are also adversely affected, although not to the same degree.
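The doubled zero-check looks like this in a sketch (8-bit patterns; the function names are mine, for illustration):

```python
# Zero and equality tests on 8-bit ones'-complement values (illustrative).
POS_ZERO, NEG_ZERO = 0x00, 0xFF

def oc_is_zero(x):
    # Two bit patterns mean zero, so two comparisons (or extra silicon).
    return x == POS_ZERO or x == NEG_ZERO

def oc_equal(a, b):
    # A bit-for-bit match is not enough: +0 must compare equal to -0.
    return a == b or (oc_is_zero(a) and oc_is_zero(b))
```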

For ***big***, 'maximum performance' machines, the cost of the additional
hardware for dealing with unique +0/-0 was small enough (relative to the
*total* cost of the machine) that it was easy to justify for the performance
benefits. When the IEEE stuck its oar in, 'budget' computing was a fact
of life -- mini-computers and micro-processors. It was *important* to the
***user*** of computing that the results on 'cheap' hardware match ***exactly***
that obtained from using the 'high priced spread', and that *code* developed
on one machine run ***unchanged*** on another machine and produce exactly
the same results.

At the vehement urging of the makers of 'budget' computing systems, as well
as the users thereof, 2's complement arithmetic was selected for the IEEE
standard, ***despite*** the obvious problem of a *non-symmetric* representation
scheme. Number 'comparisons' were much more common in existing code than
'negations', thus it 'made sense' to use a representation scheme that favored
the 'more common' operations. In addition, the 'minor problem' of the 'most
negative number' not having a positive counterpart was not perceived to be
a 'killer' issue. "Real-world" data showed that only in ***VERY RARE***
situations did numeric values in computations get 'close' to the 'limit of
representation' in hardware.

I ***understand*** the decision, although, still to this day, I disagree with it.
<wry grin>

The 'standard' algorithm, in computer *hardware*, for getting the
2's complement 'negative' of a value is to invert the bits (1's complement
negative) and _ADD_1_ to that value. This algorithm, as does ***any***
other possible algorithm, ***fails*** when the value to be negated is the
'most negative value' that can be represented on the machine.
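A quick sketch of that failure at 8 bits (invert-and-add-one wraps right back to the same bit pattern; the name tc_neg is mine):

```python
# Negating the most negative 8-bit two's-complement value (illustrative).
MASK = 0xFF
MOST_NEG = 0x80  # bit pattern for -128

def tc_neg(x):
    return ((x ^ MASK) + 1) & MASK  # invert, then add one

# 0x80 -> invert: 0x7F -> add one: 0x80. Negation hands back the same value.
```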

Yup. On a 2's complement machine, "NOT" (1's complement negate) is ***much***
faster than "NEG" (2's complement negate). On a 1's complement machine,
the opcodes are synonyms (if the latter code even *exists*, that is).

Yup, you're one of 'them' alright. You ain't much.


What a moron, RoboTwat, but I already knew that.


Yup, you're right to the core. I find it interesting that the bulk of your comments are degrading and condescending to just about anybody here. You must really think you're something. Well... I'm here to tell you that you are not nice.


I know you're stupid. You don't have to constantly demonstrate the fact.

#### Site Timeline

- posted on April 13, 2010, 12:35 pm

Not true. The (add and subtract) operations use the same logic. Now, multiply and divide are a whole different kettle...

- posted on April 13, 2010, 9:40 pm

Really? I've -never- seen an IC chip that did subtraction directly. 'Adder' chips, however, are common as dirt.

You can -accomplish- subtraction using an 'adder' and a bunch of inverters on the second input (and ignore the overflow).


- posted on April 14, 2010, 1:36 pm

On Apr 13, 4:40 pm, snipped-for-privacy@host122.r-bonomi.com (Robert Bonomi) wrote:

Really. Really? You haven't looked very hard. http://www.onsemi.com/pub_link/Collateral/MC10H180-D.PDF

...which are the same operations.

So what? Are you trying to prove your prowess with useless information?

<snipped useless '1's complement stuff>


- posted on April 14, 2010, 1:56 pm

Are you showing off that the information is useless to YOU because your prowess is so elevated?

- posted on April 14, 2010, 3:46 pm

Idiot. A particular method of encoding negative numbers isn't relevant when discussing the difference/similarity between subtraction and addition. I wouldn't expect you to know anything about it. OTOH, you are up to your usual standards in cashing checks with your mouth that your ass can't cover.

- posted on April 14, 2010, 5:05 pm


FWIW, and I don't wish to get dragged into any muck, but the algorithm for translating to negative numbers (assuming 2s complement representation) IS relevant if one will evaluate expressions of the form A-B as A+(-B).

Since bitwise negation can be performed by a single transistor, I would expect that a value in a register could be negated VERY fast. I think just a few clock cycles.

Bill


- posted on April 14, 2010, 6:02 pm

Only if you're doing invert-add to perform subtraction. The topic was specifically about hardware subtraction (or addition = complement-subtract), ***not*** invert-add.

- posted on April 14, 2010, 11:10 pm


That must be one of the reasons they switched to 2s complement, no?

- posted on April 14, 2010, 11:16 pm

I hate to answer my own question, but the main reason was the duplicity of zeros in 1s complement, I think.

Bill

- posted on April 15, 2010, 3:32 am

Mainly, but the uncertainty of the wrap-around carry doesn't help. I'm not sure whether some of the fancier adders (carry look-ahead, carry save, etc.) work well for 1's complement, either (again, the wrap-around issue). Your observation on the two zeros is spot on, however. That takes an additional operation in the critical path of most calculations.

- posted on April 15, 2010, 10:40 pm

The 'ambiguous' bit-pattern for 'zero' *was* *the* compelling reason that IEEE 'standardized' on 2's complement.

- posted on April 16, 2010, 12:38 am

I think some would say that if your variables are getting close to their limits very often, then it's time to consider looking for a new data structure.

- posted on April 15, 2010, 10:11 pm

The 'standard' algorithm, in computer *hardware*, for getting the 2's complement 'negative' of a value is to invert the bits and _ADD_1_ to that value.

- posted on April 14, 2010, 8:21 pm

Yup, you're one of 'them' alright. You ain't much.

- posted on April 14, 2010, 8:30 pm

What a moron, RoboTwat, but I already knew that.

- posted on April 14, 2010, 9:24 pm

Yup, you're right to the core. I find it interesting that the bulk of your comments are degrading and condescending to just about anybody here. You must really think you're something. Well... I'm here to tell you that you are not nice.

- posted on April 15, 2010, 3:33 am


Better than being a leftist loser, so yes, I am right.

I don't give a flying fuck what ***you*** think. You, in particular. Moron.


- posted on April 15, 2010, 12:07 pm

On Apr 14, 11:33 pm, " snipped-for-privacy@att.bizzzzzzzzzzzz"


Oh, I see... a liberal.


- posted on April 15, 2010, 1:29 pm

I know you're stupid. You don't have to constantly demonstrate the fact.

- posted on April 15, 2010, 1:39 pm

On Thu, 15 Apr 2010 06:29:11 -0700 (PDT), " snipped-for-privacy@gmail.com"

You mean in the same way you demonstrate how to be a red neck?

