[Whonix-devel] Test results of Jitter RNG

Stephan Mueller smueller at chronox.de
Fri Apr 26 23:14:28 CEST 2019


On Friday, 26 April 2019 at 15:59:39 CEST, Stephan Mueller wrote:

Hi,

A system called TNT BOM BOM sent me several test results. I am not sure to 
whom I should send the analysis. Therefore, I will reply to this thread.

The first test, "Test-Results", shows that the heuristic validating whether 
the underlying platform is sufficient for the Jitter RNG detected no 
insufficiency during 10000 test runs. Check.

The file foldtime.O0 contains test results for the non-optimized binary code 
that is the basis for the Jitter RNG. To understand what it shows, we have to 
understand what the Jitter RNG really does: it simply measures the execution 
time of a fixed code fragment. The test does the same, i.e. it measures what 
the Jitter RNG would measure. Each time delta is simply recorded.
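
For illustration, here is a minimal sketch of the measurement principle. This 
is not the actual jitterentropy test code; it assumes a POSIX clock_gettime() 
timer, whereas the real implementation uses its own high-resolution time stamp 
handling:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
        volatile uint64_t sink = 0;
        int i, j;

        for (i = 0; i < 1000; i++) {
                uint64_t start = now_ns();

                /* fixed code fragment whose execution time is measured */
                for (j = 0; j < 128; j++)
                        sink += (uint64_t)j * 0x5deece66dULL;

                /* each time delta is what the Jitter RNG would record */
                printf("%llu\n", (unsigned long long)(now_ns() - start));
        }
        (void)sink;
        return 0;
}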

Each time delta is expected to contribute entropy to the entropy pool. But how 
much? We can use the SP800-90B tool set provided by NIST at [1]. This tool, 
however, can only process input data with a symbol size of a few bits at most. 
Thus, we take the 4 LSBs of each time delta, hoping that they already contain 
sufficient entropy.
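
As an illustration of that preparation step, here is a rough sketch (my own, 
not part of the test harness) that masks out the 4 LSBs and writes one symbol 
per output byte. It assumes foldtime.O0 holds one decimal delta per line; the 
exact input format expected by the NIST tool should be checked against its 
documentation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        FILE *in = fopen("foldtime.O0", "r");          /* assumed: one decimal delta per line */
        FILE *out = fopen("foldtime-4lsb.bin", "wb");  /* one 4-bit symbol per output byte */
        unsigned long long delta;

        if (!in || !out)
                return 1;

        while (fscanf(in, "%llu", &delta) == 1) {
                uint8_t symbol = (uint8_t)(delta & 0xF);   /* keep the 4 LSBs only */
                fwrite(&symbol, 1, 1, out);
        }

        fclose(in);
        fclose(out);
        return 0;
}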

Using the tool [1], we get the following output:

Symbol alphabet consists of 16 unique symbols

Running non-IID tests...

Running Most Common Value Estimate...
        Most Common Value Estimate (bit string) = 0.985991 / 1 bit(s)

Running Entropic Statistic Estimates (bit strings only)...
        Collision Test Estimate (bit string) = 0.904492 / 1 bit(s)
        Markov Test Estimate (bit string) = 0.993746 / 1 bit(s)
        Compression Test Estimate (bit string) = 0.718504 / 1 bit(s)

Running Tuple Estimates...
        T-Tuple Test Estimate (bit string) = 0.924750 / 1 bit(s)
        LRS Test Estimate (bit string) = 0.897582 / 1 bit(s)

Running Predictor Estimates...
        Multi Most Common in Window (MultiMCW) Prediction Test Estimate (bit string) = 0.997074 / 1 bit(s)
        Lag Prediction Test Estimate (bit string) = 0.995814 / 1 bit(s)
        Multi Markov Model with Counting (MultiMMC) Prediction Test Estimate (bit string) = 0.988593 / 1 bit(s)
        LZ78Y Prediction Test Estimate (bit string) = 0.987123 / 1 bit(s)

h': 0.718504

The last line is the key: it contains the minimum entropy per bit of the 4-bit 
snapshot:

- we have 0.7185 bits of entropy per data bit

- as we analyzed 4 bits of each time delta, we get 4 * 0.7185 = 2.874 bits of 
entropy per 4-bit snapshot

- assuming the worst case that all other bits in the time delta have no 
entropy, we have 2.874 bits of entropy per time delta

- the Jitter RNG gathers 64 time deltas for returning 64 bits of random data 
and it uses an LFSR with a primitive and irreducible polynomial which is 
entropy preserving. Thus, the Jitter RNG collected 64 * 2.874 = 183.936 bits 
of entropy for its 64 bit output.

- as the Jitter RNG maintains a 64 bit entropy pool, its entropy content 
cannot be larger than the pool itself. Thus, the entropy content in the pool 
after collecting 64 time deltas is min(64 bits, 183.936 bits) = 64 bits (see 
the calculation sketch below)
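
For completeness, here is the accounting above condensed into a few lines of C. 
This is only a back-of-the-envelope calculation; the 0.718504 value is the h' 
reported by the SP800-90B tool:

#include <stdio.h>

int main(void)
{
        const double h_per_bit = 0.718504;    /* h' from the SP800-90B run, per analyzed bit */
        const double bits_analyzed = 4.0;     /* 4 LSBs of each time delta */
        const double deltas_per_output = 64.0;
        const double pool_size_bits = 64.0;

        double h_per_delta = h_per_bit * bits_analyzed;        /* 2.874 bits */
        double h_collected = h_per_delta * deltas_per_output;  /* 183.936 bits */

        /* the pool cannot hold more entropy than its own size */
        double h_pool = h_collected < pool_size_bits ? h_collected : pool_size_bits;

        printf("entropy per time delta:   %.3f bits\n", h_per_delta);
        printf("entropy collected:        %.3f bits\n", h_collected);
        printf("entropy credited to pool: %.3f bits\n", h_pool);
        return 0;
}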

This implies that the Jitter RNG output has (close to) one bit of entropy per 
data bit, i.e. its 64 bit output has (close to) 64 bits of entropy.

Bottom line: When the Jitter RNG injects 64 bits of data into the Linux /dev/
random via the IOCTL, it is appropriate that the entropy estimator increases 
by 64 bits.
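
For illustration, here is a minimal sketch of how such an injection can be 
done with the generic Linux RNDADDENTROPY IOCTL on /dev/random. This is not 
taken from any particular daemon, requires root privileges, and abbreviates 
error handling:

#include <fcntl.h>
#include <linux/random.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        uint8_t jent_block[8] = { 0 };  /* placeholder for one 64-bit Jitter RNG output block */

        /* rand_pool_info carries a variable-length data buffer at its end */
        struct rand_pool_info *rpi = malloc(sizeof(*rpi) + sizeof(jent_block));
        if (!rpi)
                return 1;

        rpi->entropy_count = 64;              /* credit 64 bits of entropy */
        rpi->buf_size = sizeof(jent_block);
        memcpy(rpi->buf, jent_block, sizeof(jent_block));

        int fd = open("/dev/random", O_WRONLY);
        if (fd < 0) {
                perror("open /dev/random");
                free(rpi);
                return 1;
        }

        if (ioctl(fd, RNDADDENTROPY, rpi) < 0)
                perror("RNDADDENTROPY");

        close(fd);
        free(rpi);
        return 0;
}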

Bottom line: From my perspective, I see no issue in using the Jitter RNG as a 
noise source in your environments.


Note: applying the Shannon entropy formula to the data yields much higher 
entropy values, as the Shannon entropy is always an upper bound for the 
min-entropy used here.
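
As a quick toy illustration (made-up probabilities, not the test data) of why 
Shannon entropy figures come out higher than the min-entropy used above:

#include <math.h>
#include <stdio.h>

int main(void)
{
        /* hypothetical probabilities of a 4-symbol source */
        const double p[] = { 0.4, 0.3, 0.2, 0.1 };
        double shannon = 0.0, pmax = 0.0;
        unsigned i;

        for (i = 0; i < sizeof(p) / sizeof(p[0]); i++) {
                shannon -= p[i] * log2(p[i]);
                if (p[i] > pmax)
                        pmax = p[i];
        }

        printf("Shannon entropy: %.4f bits per symbol\n", shannon);      /* ~1.846 */
        printf("Min-entropy:     %.4f bits per symbol\n", -log2(pmax));  /* ~1.322 */
        return 0;
}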

Note II: This assessment complies with the entropy assessments to be performed 
for a NIST FIPS 140-2 validation compliant with FIPS 140-2 IG 7.15.

[1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

Ciao
Stephan



