But it's true! ;-)

> Sure there is. The metric world has no problem with using infinitely repeating decimals.

On 8/8/2016 4:34 AM, Just Wondering wrote:

I thought a real number only had ONE decimal.... How many decimals do you typically see in a number?

1/3 + 1/3 + 1/3 = 1

How would you show that using equal whole numbers??? '~)

On 8/8/2016 7:44 AM, Leon wrote:

>>>>>

> That's a different subject, but read and learn, grasshopper. https://www.mathsisfun.com/numbers/real-numbers.html

It depends on the number. Anywhere from zero on up as far as you care to go.

On 8/8/2016 2:16 PM, Just Wondering wrote:

Sorry, I only now realized you did not say repeating decimal "points" ;~)

If you want great precision, a fractional system will always be better. A fraction by definition can express any rational number with perfect precision. A decimal is much more limited. For example, you cannot express 1/3 in decimal with perfect precision, at least not using a finite number of digits. When used in calculations, decimals can and will accumulate errors, and sometimes the result can be wildly different from the true answer even after just a few steps. Correctly identifying and handling accumulated error in floating-point arithmetic (like decimal, but usually using a binary number system) is a very difficult and complex area of computer science.
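A quick Python sketch of that point (the 0.1 loop and the `Fraction` comparison are my own illustration, not from the posts above):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 (or 1/3) exactly,
# so repeated addition accumulates rounding error:
total = 0.0
for _ in range(10):
    total += 0.1
print(total)          # 0.9999999999999999 -- not exactly 1.0
print(total == 1.0)   # False

# A rational type keeps perfect precision for any fraction:
exact = sum([Fraction(1, 10)] * 10)
print(exact == 1)     # True
print(Fraction(1, 3) * 3 == 1)  # True: 1/3 + 1/3 + 1/3 = 1 exactly
```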

The metric system is generally considered better for two reasons:

1) It is much more rigorously standardized conceptually. Every unit is based on multiplication or division by 10. A kilogram and a kilometer are each 1000 times a gram and a meter, respectively. And this applies to all types of measurements--length, volume, area, mass, etc. It's much easier to translate, e.g., lengths into areas, as the conversion factors are typically much simpler. Even the names of units are standardized: kilo-, milli-, etc.
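As a rough sketch of why those conversions stay simple, here's a tiny Python example (the `PREFIX` table and `convert` helper are hypothetical names of my own, just for illustration):

```python
# Metric prefixes are just powers of 10.
PREFIX = {"kilo": 10**3, "base": 10**0, "centi": 10**-2, "milli": 10**-3}

def convert(value, from_prefix, to_prefix):
    """Convert a quantity between metric prefixes by shifting powers of 10."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(2.5, "kilo", "base"))   # 2.5 km -> 2500.0 m
print(convert(350, "centi", "base"))  # 350 cm -> 3.5 m
# Length-to-area conversion just squares the same factor:
print((1 * PREFIX["kilo"]) ** 2)      # 1 km^2 = 1,000,000 m^2
```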

2) Arithmetic with decimals is also much simpler because it's similar to regular arithmetic with whole numbers. Calculating with fractions requires more bookkeeping. The most common calculations are pretty simple in US customary units, but in science and engineering you're often dealing with arbitrary numbers at much greater precision. IOW, you're rarely, if ever, juggling standard dimensions that co-evolved to work well with common sums and multiples.
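To make the bookkeeping point concrete, a small Python comparison (my illustration; the `fractions.Fraction` type does the common-denominator work automatically):

```python
from fractions import Fraction

# Fraction addition needs a common denominator and then reduction:
a = Fraction(3, 8) + Fraction(5, 16)   # 6/16 + 5/16
print(a)                               # 11/16

# Decimal addition works digit-by-digit, like whole-number arithmetic:
print(0.375 + 0.3125)                  # 0.6875
```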

That doesn't mean metric is the best possible choice. Base-10 is a really crappy base. It's a historical accident that we use it, and it has nothing to do with the number of fingers we have. Powers of 10 cannot express thirds very well, or even fourths cleanly. People undoubtedly still conceptualize those things as fractions even when using decimals. Arguably we'd be better off using base-12, or even base-60 like the Babylonians. 12 is evenly divisible by 2, 3, 4, and 6. 60 is evenly divisible by 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30. People are already familiar with base-12 and base-60 units because that's how we count time. And some other familiar measurements loosely (but inconsistently) use those units. (We still express those units using decimal notation, though, which can be confusing.)
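The divisibility claim is easy to check; a throwaway Python function (`divisors` is my own name) makes the comparison between bases concrete:

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))  # [1, 2, 5, 10]
print(divisors(12))  # [1, 2, 3, 4, 6, 12]
print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```

Base-10's only nontrivial divisors are 2 and 5, which is why halves and fifths terminate in decimal but thirds don't.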

Computer science uses base-2 (binary) because of similar properties related to how arithmetic works. Base-8 (octal) and especially base-16 (hexadecimal) are common in software. The former groups binary digits in threes, while the latter groups them in fours, which maps neatly onto common word sizes of 8, 16, and 32 bits. You can get used to thinking in different bases fairly easily. I think math would come easier for many young kids if they practiced using different number systems explicitly. I never really "got" English grammar (beyond rote memorization) until I began learning Spanish in high school. Spanish class did more to help me understand English grammar than any English class ever did.
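Python's built-in base conversions show that grouping directly (this snippet is my own illustration of the point, not from the thread):

```python
n = 0b110_100_101          # underscores group the bits in threes

print(bin(n))              # 0b110100101
print(oct(n))              # 0o645 -- one octal digit per 3 bits: 110|100|101
print(hex(n))              # 0x1a5 -- one hex digit per 4 bits: 1|1010|0101
print(int("645", 8) == n)  # True: parsing the octal form round-trips
```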

Good post, well thought out and presented

I thought, though, that CS used base-2 because that was what the hardware did. As I understand transistors and TTL, they natively work with the presence or absence of a voltage, which lends itself to base-2.

Puckdropper

Puckdropper wrote:

Base 8 and 16 are "shorthand" for base 2. e.g. 14 (Decimal) = 1110 (base 2) = 16 (base 8) = E (base 16).

People don't like 0s and 1s, but computers do, so we have software that translates between various bases. Of course 14 need not just be a numeric value; it could also represent an instruction which tells a computer to increment a register, or to do some other thing.

Bill

Forgot a popular number base - base 26, which used the alphabet and numbers. This was for large 64-bit and 128-bit parallel processors for the military, and used on the 360 by some.

Last I heard, the military division of IBM was closed down. Times change.

Martin

On 9/14/2016 7:37 PM, Bill wrote:

Shorthand in that it's easier to make binary machines, but base-16 is just as much of a base as base-15 is (though neither, in that context, is particularly useful except as a learning tool).

Note that some floating-point formats (e.g., IBM's hexadecimal floating point) use base-16 arithmetic. The base-2 bits are just a representation of the base-16 digits.

krw wrote:

There are shortcuts with base 8 and base 16 since they are powers of 2. Base 15 wouldn't be useful in this context.

Bill wrote:

For instance, if you wish to write down a 32-bit string, 8 hex digits is the "nicest" way to do it, for a person.

And if you have 12-bit systems, 4 octal digits were the nicest way to do it (e.g. PDP-8, PDP-12).

For BCD machines, decimal rules. Makes it very easy to read core dumps.
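The word-size arithmetic above can be sketched in Python (the sample values are arbitrary, chosen just for illustration):

```python
# A 32-bit word is exactly 8 hex digits, since each hex digit covers 4 bits:
word32 = 0x1A2B3C4D
print(f"{word32:08X}")        # 1A2B3C4D

# A 12-bit word (e.g. a PDP-8 word) is exactly 4 octal digits, 3 bits each:
word12 = 0o7777               # largest 12-bit value, 4095 decimal
print(f"{word12:04o}")        # 7777
print(word12 == 2**12 - 1)    # True
```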
