@Barkydog: our present computers have enough trouble reliably representing just two states in binary. All sorts of error detection and correction codes are used to catch a zero that has flipped to a one, or a one to a zero. We would need very stable electrical circuits to operate quickly and reliably with ten states.
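For anyone curious what the simplest of those error-detection schemes looks like, here is a toy parity-bit sketch in Python (the function names are my own, not from any real ECC hardware); with ten voltage levels per digit you would need something far more elaborate:

```python
# Minimal sketch (my own illustration, not any real ECC circuit):
# a single even-parity bit detects one flipped bit in a binary word.
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if no single-bit error is detected."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
print(parity_ok(word))               # True
word[2] ^= 1                         # simulate a 1 flipping to a 0
print(parity_ok(word))               # False: the flip is detected
```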
I often asked my beginning computer science classes why we use a base-10 system rather than some other base. If I didn't get a response, I would make the comment, "Suppose man had developed the rotary power mower before he developed the numeration system. We would probably be using a base-9 system. That's why manufacturers had to put dead-man controls on modern lawnmowers to stop the blade when you release the handle: too many people stuck their fingers under the mower deck to see if the blades were still turning." We developed the base-10 system because we have 10 fingers.
@triedaq Yes, I understand the advantages of the base-2 system, but I think a base-10 system would exponentially increase data-access speed. Sure, it is an inconceivable goal at this point, but I hate to dismiss it. Sure, base 10 is based on the number of fingers we have, but binary data has a limited future, IMHO.
@Barkydog: I think you're confusing the number of bits a processor can handle simultaneously (or the data and address bus width, if you prefer) with the base used for arithmetic. If you could figure out a way to make a transistor reliably handle more states than off and on, and detect them without errors, then you could build a base-10 digital computer. But there really wouldn't be much point, in my opinion. Everything can be represented in base-2, and the extra complexity would offset any gains.
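On the "everything can be represented in base-2" point, here's a toy round-trip in Python (function names are just mine):

```python
# Toy illustration: any base-10 value maps losslessly to base-2 and back.
def to_binary(n):
    digits = []
    while n:
        digits.append(n % 2)   # remainder is the next binary digit
        n //= 2
    return digits[::-1] or [0]

def from_binary(digits):
    value = 0
    for d in digits:
        value = value * 2 + d
    return value

print(to_binary(1971))               # [1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
print(from_binary(to_binary(1971)))  # 1971 -- nothing is lost going through base-2
```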
I think all the gains you’re expecting with base-10 are already being accomplished by multiplexing multiple signals into one path.
@oblivion It is a confusing nomenclature. From the WIKI: "Some of the first microprocessors had a 4-bit word length and were developed around 1970. The TMS 1000, the world's first single-chip microprocessor, was a 4-bit CPU; it had a Harvard architecture, with an on-chip instruction ROM with 8-bit-wide instructions and an on-chip data RAM with 4-bit words. The first commercial microprocessor was the binary-coded decimal (BCD-based) Intel 4004, developed for calculator applications in 1971; it had a 4-bit word length, but 8-bit instructions and 12-bit addresses." I am sorry to digress, but 0 and 1 will be old school someday.
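To illustrate what "BCD-based" means there (my own toy code, nothing like the 4004's actual circuitry): each decimal digit gets stored in its own 4-bit binary group, so the decimal digits are still riding on 0s and 1s underneath.

```python
# Rough sketch of the BCD idea: one 4-bit nibble per decimal digit.
def to_bcd(n):
    """Encode a non-negative integer as a list of 4-bit nibbles, one per decimal digit."""
    return [format(int(d), "04b") for d in str(n)]

print(to_bcd(1971))   # ['0001', '1001', '0111', '0001']
```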
The problem is, how are you going to implement it in the electronics? And what good will it actually do you? For example, suppose that instead of binary we used base-4. That would give you two more states on one wire. While it would theoretically allow more instructions and faster communication, how is this any better than just adding one more parallel line to handle the next binary digit? True, it takes up less physical room, but that was my point about multiplexing multiple signals onto one wire. And you increase the complexity of the encoders/decoders needed to handle it. If microprocessors worked in base-10, you could have a much expanded instruction set, I'd think… but a CISC processor already has more instructions than programmers (or compilers) can efficiently use. MAYBE, just maybe, you could end up with more efficiency for a given clock speed if the processor could digest a larger number of states, but that brings us back to how to implement it in the switching electronics…
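The arithmetic behind that trade-off is just logarithms: the information in one symbol is log2 of the number of states it can take, so a base-4 symbol is worth exactly two binary lines and a base-10 digit about 3.3 of them. A quick check:

```python
import math

# Back-of-the-envelope: bits of information per symbol = log2(number of states).
for states in (2, 4, 10):
    print(states, "states ->", math.log2(states), "bits per symbol")
# 2 states -> 1.0 bits per symbol
# 4 states -> 2.0 bits per symbol
# 10 states -> 3.321928094887362 bits per symbol
```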
Then there’s the boolean logic you’d need to use to handle the additional states… how many more would you need to add to the existing AND OR NOT XOR, etc. to compare numbers/signals in base-10 and make full use of it?
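For what it's worth, the usual textbook way to generalize AND/OR/NOT to more than two values is min/max/complement; this is standard many-valued logic, not anything from real base-10 hardware, and the function names below are mine. A quick Python sketch:

```python
# Hypothetical sketch of generalized gates for a base-10 "digit logic".
BASE = 10

def mand(a, b):   # generalized AND: the weaker of the two signals
    return min(a, b)

def mor(a, b):    # generalized OR: the stronger of the two signals
    return max(a, b)

def mnot(a):      # generalized NOT: complement within the digit range
    return (BASE - 1) - a

print(mand(3, 7), mor(3, 7), mnot(3))   # 3 7 6
# Restricted to just 0 and 9, these collapse back to ordinary Boolean AND/OR/NOT.
```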
Your idea has some merit though… in quantum computing you could theoretically use a ternary ("trinary") representation for the different states of the system. Though QC's gains don't come from the additional states so much as from the nearly magical ability to work on them simultaneously.
I guess to really wander off target, our human genome is encoded in 4 bases, so it's "quaternary" digital code. Perhaps if it were binary instead, with a better "checksum" mechanism, no one would get cancer. Though I suppose it's done pretty well for all the operations it's performed over millions of years, and most of us aren't deranged mutants. Thank God (no pun intended) that Microsoft wasn't involved in its design.
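Just for fun: since four bases fit in two bits each, re-encoding DNA as binary is trivial (the mapping below is my own arbitrary choice):

```python
# Toy mapping of the four DNA bases onto two binary bits each.
ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bits(seq):
    return "".join(ENCODE[b] for b in seq)

print(dna_to_bits("GATTACA"))   # 10001111000100
```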
@barkydog All of those microprocessors you mentioned still used binary. A 4-bit word in binary is just four on/off switches in a row. Where it seems you're getting confused is the number of bits: each bit is still an on/off switch, so the machine is still binary. It's just that, for example, a 64-bit processor looks at longer binary "words" than a 32-bit processor does, and can therefore process more information per word.
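If it helps, here's a tiny Python illustration (mine, not anything any chipmaker shipped) of the "longer word, same on/off switches" point:

```python
# The same on/off pattern, just grouped into longer words.
value = 0b1011            # a 4-bit word: switches set to on, off, on, on
print(value)              # 11 in decimal

# A 64-bit processor simply looks at 64 such switches at once:
wide = 0b1011 << 60       # the same pattern parked in the top nibble of a 64-bit word
print(wide.bit_length())  # 64
```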
Binary isn't going anywhere any time soon because, as Triedaq hinted, if you want to make a non-binary chip you have to have it read variations in voltage as information, and holding and detecting many distinct voltage levels is much harder than "if the voltage is at or near 0 it's a 0, and if it's at or near 5 V it's a 1." As he said, even that scheme requires a lot of error correction, so imagine how much we'd need for a base-10 computer in which each half-volt step was a different digit. You'd end up going slower than the binary computer.
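To put a rough number on that, here's a crude simulation; all the assumptions (0 to 5 V swing, 0.3 V of Gaussian noise, ideal evenly spaced thresholds) are mine, so treat it as a cartoon rather than real circuit behavior:

```python
import random

# Crude cartoon: how often does noise push a signal past the decision threshold?
def decode(voltage, levels):
    step = 5.0 / (levels - 1)
    return round(voltage / step)          # snap to the nearest nominal level

def error_rate(levels, trials=100_000, noise=0.3):
    errors = 0
    for _ in range(trials):
        digit = random.randrange(levels)
        nominal = digit * 5.0 / (levels - 1)
        received = nominal + random.gauss(0, noise)
        if decode(received, levels) != digit:
            errors += 1
    return errors / trials

print("binary  :", error_rate(2))    # essentially 0
print("base-10 :", error_rate(10))   # a large fraction of digits misread
```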
In short, unless the pipe-dream of quantum computing comes to fruition, binary is here to stay.