Some of the early computers that used the decimal system (either exclusively or in combination with binary) include ENIAC, UNIVAC I and UNIVAC II, and several IBM machines. Some processors today still support binary-coded decimal (BCD), which encodes each decimal digit (or group of digits) with a fixed number of binary digits (bits). That said, your question stands: why do modern computers almost exclusively use binary?

Radix economy tells us that the most efficient base for storing information is base `e ≈ 2.71828`, and indeed ternary (the whole number nearest to `e`) has some advantages when used in computers, e.g. ternary search trees. However, ternary computers are more difficult to manufacture, and they have been shown to have higher energy consumption. Despite all of this, Donald Knuth conjectured that we will eventually switch to ternary computers because of their efficiency and elegance, but almost 40 years have passed since then with no ternary computers in sight.
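To make the radix-economy claim concrete, here is a minimal sketch (the function name `radix_economy` is my own) computing the usual measure `E(b, N) = b · log_b(N)`, i.e. the number of digits needed times the number of symbols each digit position must distinguish:

```python
import math

def radix_economy(base: int, value: int) -> float:
    """Cost of representing `value` in `base`: base * log_base(value)."""
    return base * math.log(value) / math.log(base)

# Compare bases for representing numbers up to one million.
N = 10**6
for base in (2, 3, 10):
    print(f"base {base:2d}: economy = {radix_economy(base, N):.1f}")
```

Running this shows ternary edging out binary (roughly 37.7 vs 39.9 for `N = 10**6`), with decimal far behind (about 60), which is exactly why base 3, the integer nearest `e`, is the theoretical sweet spot.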

On the other hand, decimal computers are far more error-prone and more difficult to produce than binary ones. The main reason is that binary computers use only two states: "on" (maximum voltage) and "off" (no voltage). If, for example, you make 1 V your maximum voltage, then "0" would be represented by no voltage and "9" by 1 V, but you still have to assign eight intermediate voltage levels to the remaining digits. This means that a small change in voltage can easily turn one digit into another, e.g. "4" into "5". Such a scheme is workable only at relatively high voltages, like those used by the early computers mentioned at the beginning.
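A quick back-of-the-envelope calculation (the helper `noise_margin` is hypothetical, assuming evenly spaced levels across the same 1 V swing) shows how much less noise a ten-level signal can tolerate:

```python
def noise_margin(levels: int, vmax: float = 1.0) -> float:
    """Half the spacing between adjacent evenly spaced voltage levels:
    the largest noise that still cannot flip one digit into another."""
    return vmax / (levels - 1) / 2

# Binary: two states, 0 V and 1 V -> levels are 1 V apart.
# Decimal: ten states across the same 1 V range -> levels ~0.111 V apart.
print(f"binary  margin: {noise_margin(2):.3f} V")
print(f"decimal margin: {noise_margin(10):.3f} V")
```

With the same 1 V supply, a binary signal survives up to 0.5 V of noise, while the decimal signal flips a digit after only about 0.056 V, roughly a ninefold reduction, which is why decimal logic needed the high supply voltages of those early machines.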

In conclusion, binary computers are the simplest to manufacture, use less energy, and are less error-prone than non-binary computers.