I couldn’t come up with a good title, so I don’t expect many people to be reading this.
Let’s begin. Last January I attended a Viet OpenInfra meet-up, where Viettel shared their experience of moving to the cloud. One case they described was a Java application that started up very slowly in the cloud, even though it wasn’t using much CPU or RAM. As usual, the culprit turned out to be Linux secure random number generation. Afterwards I read up on the issue, and this post is the result.
Brief Introduction to Entropy and Randomness
The Linux pseudo-random number generator (PRNG) is the part of the kernel that produces random numbers, gathering entropy from hardware interrupts generated by the mouse, keyboard, network, and so on. This component plays an important role in cryptographic systems such as SSL/TLS, as well as in many other applications.
When Entropy Pools Run Dry
On Linux, there are two common random devices: /dev/random and /dev/urandom.
/dev/urandom is non-blocking: it always returns data immediately, reusing the seeded PRNG even when fresh entropy is scarce. /dev/random, on the other hand, is blocking: you have to wait until enough entropy has accumulated before it returns random data. On many cloud systems there is no mouse or keyboard, and hence very few interrupts with which to generate randomness. As a result, some applications have to wait for entropy before they can continue. Once an application is up and running this rarely gets in the way, but at startup it can take a very long time.
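To see the difference in practice, you can read a few bytes from each device. This is a quick sketch; note that on kernels 5.6 and later /dev/random no longer blocks once the pool has been initially seeded, so the timeout is just a safety net:

```shell
# /dev/urandom never blocks: 64 bytes come back immediately.
head -c 64 /dev/urandom | wc -c

# /dev/random could block on older kernels when the entropy pool ran dry;
# timeout(1) bounds the wait so the demo cannot hang indefinitely.
timeout 5 head -c 64 /dev/random | wc -c
```

If the second command prints fewer than 64 bytes, the pool was starved and the read was cut off by the timeout.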
The Userland Solution for Populating Entropy Pools
Now for the solution. The problem is actually quite obvious, so I was tempted to just write “you probably know how to use Google”, but if I wrote that, GL would surely reject my post, so let me introduce it a bit.
When entropy is not enough, the solution, of course, is to supplement it. If there aren’t enough interrupts, entropy can come from another source, such as a video card or sound card. You don’t have to write any of this yourself: on Linux you can use the haveged package. Installation instructions are easy to google, and copying a few commands off the internet here wouldn’t add much. haveged harvests randomness from variations in the processor’s internal state while it executes code. This raises the question of whether the output is truly random; see the FIPS test section below for results.
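For reference, a typical installation looks like the following (package names assumed for Debian/Ubuntu and RHEL/CentOS; adjust for your distribution):

```shell
# Debian/Ubuntu
sudo apt-get install haveged

# RHEL/CentOS (the package lives in the EPEL repository)
sudo yum install haveged

# Start the daemon now and enable it at boot (systemd systems)
sudo systemctl enable --now haveged
```

Once the daemon is running, it continuously tops up the kernel entropy pool in the background.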
Testing Availability of Entropy & Quality of Random Data
Well, surely you have no doubts about the efficiency and safety of packages shipped on Linux, but let’s test it anyway. For this test I will use the FIPS 140-2 method implemented by rngtest, provided in the rng-tools package:
# cat /dev/random | rngtest -c 1000
You should see output similar to this:
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
rngtest: starting FIPS tests...
rngtest: bits received from input: 20000032
rngtest: FIPS 140-2 successes: 999
rngtest: FIPS 140-2 failures: 1
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 1
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=1.139; avg=22.274; max=19073.486)Mibits/s
rngtest: FIPS tests speed: (min=19.827; avg=110.859; max=115.597)Mibits/s
rngtest: Program run time: 1028784 microseconds
998–1000 successes out of 1000 looks good. To check the amount of entropy currently available, you can use the following command:
# cat /proc/sys/kernel/random/entropy_avail
Basically, you need this number to be greater than 1000.
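To make that check scriptable, here is a minimal sketch (the 1000 threshold is just the rule of thumb above, not a hard limit):

```shell
# Read the kernel's current entropy estimate and flag it when it is low.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 1000 ]; then
    echo "low entropy: $avail"
else
    echo "entropy ok: $avail"
fi
```

A check like this could be dropped into a cron job or monitoring agent to catch entropy starvation before applications start to stall.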