Cryptographic primitives need random numbers to protect your data. Random numbers are used for generating secret keys, nonces, random padding, initialization vectors, salts, etc. Deterministic pseudorandom number generators are useful, but they still need truly random seeds from entropy sources to produce random output. Researchers have shown examples of deployed systems whose entropy sources did not provide enough randomness, and as a result, cryptographic keys were compromised. So how do you know how much entropy is in your entropy source?
Estimating entropy is a difficult (if not impossible) problem, and we’ve been working to create usable guidance that will give conservative estimates on the amount of entropy in an entropy source. We want to share some of the challenges and proposed methods. We will also talk about some new directions that we’re investigating, and present results of our estimation methods on simulated entropy sources.
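To give a flavor of what a conservative estimate looks like, here is a small Python sketch of one simple estimator of this kind: the "most common value" estimate, which bounds the per-sample min-entropy using the observed frequency of the most frequent output. This is an illustration of the general approach, not the authors' full methodology; the function name and confidence parameter are choices made for this example.

```python
import math
import random
from collections import Counter

def most_common_value_estimate(samples):
    """Conservatively estimate per-sample min-entropy (in bits).

    Uses the frequency of the most common value, then inflates that
    frequency with a confidence margin (z = 2.576, roughly a 99%
    upper bound) so the entropy estimate errs on the low side.
    """
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    # Upper-bound the true probability of the most likely value.
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_u)

# Example: a biased "coin" that comes up 1 about 75% of the time has at
# most -log2(0.75) ~= 0.415 bits of min-entropy per flip, and the
# conservative estimate should land slightly below that bound.
random.seed(1)
flips = [1 if random.random() < 0.75 else 0 for _ in range(100_000)]
print(most_common_value_estimate(flips))
```

Note that this estimator is only trustworthy if the samples are independent and identically distributed; a real entropy source may have correlations that make even a "conservative" estimate like this one too optimistic, which is part of what makes the problem hard.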
The authors work within the Cryptographic Technology Group at the National Institute of Standards and Technology (NIST). Meltem is a cryptographer at NIST and holds a Ph.D. in Cryptography from Middle East Technical University.
John is an experienced cryptographer at NIST and has degrees in Computer Science and Economics from the University of Missouri-Columbia.
Kerry is a computer scientist at NIST and holds a D.Sc. in Computer Science from The George Washington University.