Hmm... The reason calculators don't use binary is that translating
from decimal to binary to do a calculation, and then converting back to
decimal for output, is generally less efficient than just using BCD. Just saying...
As I said, you're wrong. Conversion is a trivial matter (modulo-divide
by "10" and post the remainder to the display - repeat). The
problem is adding 1/3 + 2/3. People understand that .333333333 is
1/3 and .666666666 is 2/3, but they don't like the answer to be
.999999999. The logic to make it "right" in every case wasn't trivial
for early calculators.
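That modulo/divide loop is simple to sketch; here's an illustrative Python version (a sketch of the idea, not any particular calculator's firmware):

```python
def to_decimal_digits(n):
    """Convert a binary integer to its decimal digits by the loop
    described above: modulo-divide by 10, post the remainder to the
    display, repeat with the quotient."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, remainder = divmod(n, 10)  # quotient feeds the next pass
        digits.append(remainder)      # remainder is the next display digit
    return digits[::-1]               # digits came out least significant first

print(to_decimal_digits(0b101010))    # binary 101010 -> [4, 2]
```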
If we'd simply stop counting our thumbs and use them as status bits
instead, binary would come a whole lot more naturally. Teach your kids to
count properly: One, two, three, four, overflow, sign, five, six, seven, eight.
The status bits might need a bit more thought.
How about using them for hexadecimal? It might take some Vulcan
finger dexterity, though.
Status bits seem pretty simple, at least for your base-8. Increment
right to left, decrement left to right. Overflow then becomes the
count after either pinkie (pinkie and ring change together) and sign
becomes a decrement or increment past zero.
Only on the old BCD mainframes was any form of base-16 used,
and there, values of A-F weren't available for use (they were
called undigits on the Burroughs systems and would cause a fault
if used in an arithmetic operation - integer, fixed-point or floating-point).
All modern processors do arithmetic in binary[*]. Don't confuse
the storage format with the human representation of the storage when it's displayed.
[*] IBM's Power processors also support decimal floating point (a la BCD).
Huh? How can you have base-16 arithmetic and not use A-F? That
paragraph makes no sense.
It depends on how you look at it. The hardware uses binary logic,
sure, but the arithmetic is purely hexadecimal. Normalization is done
in hexadecimal digits and the "binary point" is actually a
"hexadecimal point", for instance.
Puckdropper <puckdropper(at)yahoo(dot)com> wrote:
You are exactly correct - computers use base 2 because of
the hardware; it has nothing to do with arithmetic (some
early computers used base 3, which is easy to implement in
an analog computer and does make some arithmetic easier).
Programmers use hex (base 16) because it's easier than a
whole bunch of 1s and 0s. Experienced programmers can do
basic math in hex in their head, whereas no-one can do math
in their head with binary numbers bigger than a few digits
(other than multiply/divide by 2, of course).
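A quick Python illustration of the point - each hex digit stands for exactly four bits, which is why hex reads as compact binary:

```python
value = 0xDEADBEEF                # 32 bits in 8 hex digits

print(format(value, "032b"))      # 11011110101011011011111011101111
print(format(value, "08X"))       # DEADBEEF

# The mapping is digit-by-digit: 0xD is the bit group 1101, etc.
assert int("1101", 2) == 0xD
```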
In most of life, close enough is, well, close enough.
But no one can decide what the base unit should be. Some like microns
(micrometers), others use angstroms. That's just the tip of the iceberg.
When I'm measuring, I avoid the bookkeeping by deciding on my
resolution and then calculating using just the numerator. For instance, if
1/32" is "good enough", I don't use 1/2" or 1/4", rather 16(/32) or 8(/32).
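A tiny Python sketch of that numerator-only bookkeeping (the names are illustrative):

```python
# Work in whole 32nds: pick 1/32" as the resolution and carry only
# numerators, so 1/2" is 16 and 1/4" is 8 -- no fraction arithmetic
# until the very end.
HALF, QUARTER = 16, 8          # both expressed in 32nds

total = HALF + QUARTER + 3     # 1/2" + 1/4" + 3/32"
print(f'{total}/32"')          # 27/32"
```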
They taught us arithmetic in different bases, up to base-32 (and, of
course, conversion between them) in fifth and sixth grade.
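The schoolbook repeated-division method handles any of those bases; a hedged Python sketch:

```python
def to_base(n, base):
    """Digits of a nonnegative integer in the given base,
    most significant first (schoolbook repeated division)."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1]

def from_base(digits, base):
    """Horner evaluation back to an integer."""
    n = 0
    for d in digits:
        n = n * base + d
    return n

print(to_base(1000, 32))       # [31, 8], i.e. 31*32 + 8
print(from_base([31, 8], 32))  # 1000
```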
Those fractions are probably due to conversion from Imperial Measure.
I've seen analogous measurements in cookbooks for the US market where
they have obviously converted metric to imperial weights and
measurements. For example, I've seen a recipe asking for 1.76oz instead
of the original 50g.
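That 1.76 oz is exactly what 50 g becomes under the standard gram/ounce factor, rounded to two places - a quick check in Python:

```python
GRAMS_PER_OUNCE = 28.349523125   # avoirdupois ounce, exact by definition

print(round(50 / GRAMS_PER_OUNCE, 2))   # 1.76
```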
Honestly, metric is MUCH easier if you work in it from scratch.
That would be a logical explanation, but the Leigh jig and the
slides are manufactured in a metric country, and the slides have
measurements that are clearly even-number mm and are made to the 35mm
system. And the spacing of the holes on the slides doesn't really need
to be any specific measurement at all.
No, that's not it. That is how to upgrade the fingers of your DT jig to a
later version past the D4. You can buy the new set of fingers that are
identical in size and shape, except for the extra holes in the set, or you
can make yours the same by drilling the holes in those odd sizes.
That was my first thought, because that's a common problem.
The examples Leon gives don't seem to translate to any
sensible fraction of an Imperial unit. 3.57mm isn't one of
the letter/number system of drill sizes either, although it's
a little bigger than a #28.
Possibly the odd values are accumulated rounding error, due
to going metric to imperial and back to metric.
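One hypothetical way such a value could arise, sketched in Python (the nearest-64th rounding step is an assumption for illustration, not anything known about Leigh's actual process):

```python
MM_PER_INCH = 25.4

metric = 3.5                                      # a clean metric size
inches = round(metric / MM_PER_INCH * 64) / 64    # snap to nearest 1/64"
back = round(inches * MM_PER_INCH, 2)
print(back)                                       # 3.57 -- an "odd" mm value
```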