
Thoughts on numbering systems

Published: January 03, 2019 09:50:36

The evolution of human numbering systems is an interesting one. First came the struggle to settle on a counting base. Then you discover you have 10 fingers! And so you can count in "lots of 10". Then there's another problem: you need a symbol for each possible count/quantity, for example 0, 1, 2, 3, and so on. Now that you can count things like your money correctly, it all comes so naturally, huh?

I think it must have been easier to describe the binary system in computing later. A bit, like a switch, can only take one of two states: on or off, 1 or 0. How about the other common digital numbering systems, hexadecimal and octal? Why were (and are) they so common in computing? Today, hexadecimal, for instance, is used to represent memory addresses and data. There's a whole lot of technical explanation on why, say, hexadecimal is more practical than decimal, but I won't delve into that.
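To make that concrete, here's a minimal Python sketch (my own illustration, not from the post) showing data and a memory address rendered in hexadecimal:

```python
# A small sketch: hexadecimal as a human-readable view of data and addresses.
data = bytes([72, 101, 108, 108, 111])  # the ASCII bytes for "Hello"

print(data.hex())     # '48656c6c6f' -- each byte shown as exactly two hex digits
print(hex(id(data)))  # e.g. '0x7f3b2c1a9e50' -- in CPython, id() happens to be the
                      # object's memory address, conventionally displayed in hex
```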

The short answer, though, is that humans (or programmers?) are lazy. In the case of binary, they wouldn't want to deal with long strings of digits. If you look carefully, you'll notice there is a direct relation between the binary system and the other two: 3 bits correspond to 1 octal digit and 4 bits to 1 hex digit, so converting back and forth is trivial. Thus, the two are more compact, more convenient and less error prone for humans, as the little sketch below shows.
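Here's a quick Python illustration of that grouping (again my own example): the octal and hex forms are just the same bits read off in groups of 3 and 4.

```python
n = 0b110101011  # 427 in decimal

print(bin(n))  # '0b110101011'
print(oct(n))  # '0o653'  -- each octal digit covers 3 bits: 110 101 011 -> 6 5 3
print(hex(n))  # '0x1ab'  -- each hex digit covers 4 bits:  1 1010 1011 -> 1 a b
```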

One final thought: even when your data or programs are written in hexadecimal form, remember that they must ultimately be converted to binary, because computers can only execute binary code.
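A quick sanity check of that point (my sketch, not from the post): the notation is only for the reader's benefit; the value, and the bit pattern the machine actually works with, is the same either way.

```python
# The same value written three ways; the hardware sees identical bits.
assert 0x2A == 0b101010 == 42
print(f"{0x2A:08b}")  # '00101010' -- the underlying bit pattern
```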

PS: It turns out there's a history behind all this. Here's an interesting answer from Tolga (and a heated debate) to a related question on ResearchGate.