Now, if you wish to convert a random byte (0..255) into digits (0..9), you can do it like this:

- You take as many whole digits as your input can cover (a byte covers two digits, 0..99)

- You discard the rest of the range (100..255)

You can discard less if you have a 10-bit input and can generate 3 digits (0..999, discarding only 1000..1023).
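The byte case above can be sketched as rejection sampling; this is a minimal illustration assuming the random bytes come from Python's `os.urandom`:

```python
import os

def random_two_digits():
    """Draw two uniform decimal digits (00..99) from random bytes,
    rejecting bytes in 100..255 so the result stays unbiased."""
    while True:
        b = os.urandom(1)[0]          # uniform in 0..255
        if b < 100:                   # keep only 0..99
            return b // 10, b % 10    # tens digit, ones digit
```

Simply taking `b % 10` without the rejection step would bias the digits 0..5, since 256 is not a multiple of 10.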

This is how to find an optimal solution:

- Your input is (0..255), so in bits it is log(256)/log(2) = 8 (any log base will do)

- Your output range is 10 values, so each digit is log(10)/log(2) = 3.32 bits

- The ratio of a digit to a byte is 3.32/8 = 0.415

By repeatedly adding 0.415 to itself we see how much of the input we can use, and how large the loss is:

| Digits | Fill (bytes) | Bytes needed |
|-------:|-------------:|-------------:|
| 1 | 0.415 | 1 |
| 2 | 0.830 | 1 |
| 3 | 1.246 | 2 |
| 4 | 1.661 | 2 |
| 5 | 2.076 | 3 |
| 6 | 2.491 | 3 |
| 7 | 2.907 | 3 |
| 8 | 3.322 | 4 |
| 9 | 3.737 | 4 |
| 10 | 4.152 | 5 |
| 11 | 4.568 | 5 |
| 12 | 4.983 | 5 |
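The table can be reproduced with a few lines of Python (a sketch, using the ratio derived above):

```python
import math

BITS_PER_DIGIT = math.log2(10)        # 3.32 bits per decimal digit
RATIO = BITS_PER_DIGIT / 8            # 0.415 bytes per digit

for digits in range(1, 13):
    fill = digits * RATIO             # bytes of input consumed
    bytes_needed = math.ceil(fill)    # whole bytes you must read
    print(digits, round(fill, 3), bytes_needed)
```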

Since the input comes in bytes, using 3 bytes to produce 7 digits gives a loss of (3.000-2.907)/7 = 0.013 bytes per output digit. This obviously gets better with larger numbers: if you use 5 bytes and generate 12 digits, the loss is only 0.0014 bytes/digit.
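The 3-bytes-to-7-digits case might look like this (again a sketch assuming `os.urandom` as the byte source):

```python
import os

def random_seven_digits():
    """Turn 3 random bytes into 7 uniform decimal digits.
    3 bytes give 0..16_777_215; values >= 10_000_000 are rejected
    so every 7-digit outcome stays equally likely."""
    while True:
        n = int.from_bytes(os.urandom(3), "big")   # uniform in 0..2**24-1
        if n < 10_000_000:                         # keep only 0..9_999_999
            return f"{n:07d}"                      # 7 digits, zero-padded
```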

A compromise is usually easy to find.