lustre: why does cfs_get_random_bytes() exist?

Theodore Ts'o tytso at mit.edu
Thu Oct 3 16:39:08 UTC 2013


I've been auditing uses of get_random_bytes(), since there are places
where get_random_bytes() is used where something weaker, such as
prandom_u32(), is quite sufficient.  Basically, if kernel code just
needs a random number which does not have any cryptographic
requirements (such as in ext[234], which picks the new block group
used for inode allocations using get_random_bytes()), then
prandom_u32() should be used instead of get_random_bytes(), to save
CPU overhead and to reduce the drain on /dev/urandom's entropy pool.

Typically the reason for this is historical: either prandom_u32()
didn't exist when the code was written, or historical code was cut
and pasted into newer code.

When I came across staging/lustre/lustre/libcfs/prng.c, I saw
something which is **really** weird.  It defines a cfs_rand() which is
functionally identical to prandom_u32().  More puzzlingly, it also
defines cfs_get_random_bytes() which calls get_random_bytes() and then
xor's the result with cfs_rand().  That last step has no
cryptographic effect, so I'm really wondering who thought this was a
good idea and/or necessary.

What I think should happen is that staging/lustre/lustre/libcfs/prng.c
should be removed, calls to cfs_rand() should be replaced with
prandom_u32(), and cfs_get_random_bytes() should be replaced with
get_random_bytes().

Does this sound reasonable?

Cheers,

						- Ted
